8 research outputs found

    HINT: Supporting Congestion Control Decisions with P4-driven In-Band Network Telemetry

    Years of research on congestion control have highlighted how end-to-end and in-network protocols may perform poorly in some contexts. Recent advances in data-plane network programmability could also benefit transport protocols by enabling the mining and processing of in-network congestion signals. However, the emerging class of machine learning-based congestion control algorithms has only partially used data from the network, favoring more sophisticated model designs while neglecting potentially valuable information. In this paper, we present HINT, an in-band network telemetry architecture designed to give the end-host TCP algorithm insight into network congestion during the learning process. The key idea is to adapt switch behavior via P4 and instruct switches to insert simple device information, such as processing delay and queue occupancy, directly into transferred packets. Initial experimental results show that this approach incurs little network overhead while improving the visibility and, consequently, the accuracy of the end-host's TCP decisions. At the same time, the programmability of both switches and hosts enables customization of the default behavior as the user's needs change.
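
    The following is a minimal sketch of the end-host side of such a scheme, written in Python rather than P4. It assumes a hypothetical per-hop telemetry record (hop id, queue occupancy, processing delay) stacked at the front of the payload by each switch; the actual header layout used by HINT is not specified here.

        # Illustrative sketch only: parses a hypothetical per-hop telemetry record that a
        # P4 switch might insert into a packet, in the spirit of the HINT architecture.
        # The field layout (hop_id, queue_occupancy, processing_delay_ns) is an assumption,
        # not the format defined in the paper.
        import struct

        HOP_RECORD = struct.Struct("!BIQ")  # hop_id (1 B), queue_occupancy (4 B), delay_ns (8 B)

        def parse_telemetry(payload: bytes, num_hops: int):
            """Extract per-hop congestion signals stacked at the front of the payload."""
            hops = []
            offset = 0
            for _ in range(num_hops):
                hop_id, queue_occ, delay_ns = HOP_RECORD.unpack_from(payload, offset)
                hops.append({"hop": hop_id, "queue": queue_occ, "delay_ns": delay_ns})
                offset += HOP_RECORD.size
            return hops, payload[offset:]

        def congestion_hint(hops, queue_threshold=0.8, queue_capacity=64):
            """Toy decision rule: flag congestion to the CC algorithm if any queue is nearly full."""
            return any(h["queue"] / queue_capacity > queue_threshold for h in hops)

    A learning-based congestion controller could feed these per-hop signals into its state alongside the usual end-to-end measurements such as RTT and loss.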

    P4FL: An Architecture for Federating Learning with In-Network Processing

    The unceasing development of Artificial Intelligence (AI) and Machine Learning (ML) techniques is accompanied by growing privacy concerns related to the training data. A relatively recent approach that partially addresses such concerns is Federated Learning (FL), a technique in which only the parameters of the trained neural network models are transferred rather than the data itself. Despite the benefits that FL may provide, the approach can lead to synchronization issues (especially in the context of numerous IoT devices), the network and the server may become bottlenecks, and the load may become unsustainable for some nodes. To address these issues and reduce network traffic, in this paper we propose P4FL, a novel FL architecture that uses network programmability to program P4 switches to compute intermediate aggregations. In particular, we define a custom in-band protocol based on MPLS to carry the model parameters and adapt the P4 switch behavior to aggregate model gradients. We then evaluate P4FL in Mininet and verify that using network nodes for in-network model caching and gradient aggregation has two advantages: first, it alleviates the bottleneck effect of the central FL server; second, it accelerates the overall training process.
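
    The sketch below illustrates, in Python rather than P4, the conceptual role of an intermediate aggregation point: it sums the gradient vectors of its children and forwards a single partial aggregate upstream, so the central FL server combines a few pre-aggregated updates instead of one update per client. Function and variable names are illustrative assumptions, not taken from the paper.

        # Conceptual sketch of in-network gradient aggregation for federated learning.
        import numpy as np

        def aggregate_gradients(client_updates):
            """Sum per-client gradient vectors and record how many clients contributed."""
            total = np.zeros_like(client_updates[0])
            for grad in client_updates:
                total += grad
            return total, len(client_updates)

        def server_step(model, partial_aggregates, lr=0.01):
            """Apply the averaged gradient assembled from the in-network partial sums."""
            grad_sum = sum(g for g, _ in partial_aggregates)
            count = sum(n for _, n in partial_aggregates)
            return model - lr * (grad_sum / count)

    In the actual architecture this aggregation happens in the switch data plane on the in-band MPLS-based protocol; the Python version only conveys the arithmetic being offloaded.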

    Howdah: Load Profiling via In-Band Flow Classification and P4

    The challenges of managing datacenter traffic grow with the complexity and variety of new Internet and Web applications. Efficient network management systems are often required to prevent delays and minimize failures. In this regard, it is helpful to identify in advance the different classes of flows that (co)exist in the network, characterizing them according to their latency and bandwidth requirements. In this paper, we propose Howdah, a traffic identification and profiling mechanism that uses machine learning and a congestion-aware forwarding strategy to adapt to different traffic classes with the support of programmable data planes. With Howdah, sender and gateway elements inject in-band traffic information obtained through supervised learning. When a switch or router receives a packet, it exploits this host-based traffic classification to adapt to the desired traffic profile, for example by balancing the load. We compare our solution against recent traffic engineering approaches and show the efficacy of cooperation between host-side traffic classification and P4-based switch forwarding policies, reducing packet transmission time in datacenter scenarios.
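
    As a rough illustration of the host-side piece of such a pipeline, the sketch below trains a supervised classifier on per-flow features and produces a small integer class tag that the sender could encode in-band for switches to act on. The feature set, class names, and classifier choice are assumptions for illustration, not the configuration used in Howdah.

        # Host-side traffic classification sketch: flow features -> class tag carried in-band.
        from sklearn.ensemble import RandomForestClassifier

        CLASSES = ["latency_sensitive", "throughput_heavy", "background"]  # illustrative labels

        def train_classifier(flow_features, flow_labels):
            """flow_features: per-flow vectors such as [avg_pkt_size, avg_inter_arrival, total_bytes];
            flow_labels: integer indices into CLASSES."""
            clf = RandomForestClassifier(n_estimators=50)
            clf.fit(flow_features, flow_labels)
            return clf

        def tag_for_packet(clf, features):
            """Return the small integer tag the sender would place in a packet header field."""
            return int(clf.predict([features])[0])

    The switch side would then match on this tag to select a queue or next hop according to the desired traffic profile.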

    Applying Natural Language Processing techniques to analyze HIV-related discussions on Social Media

    Nowadays, social media are used to monitor the progress of viruses and to share important prevention and treatment information. They have also enabled communities of people united by the same disease to offer one another strength, comfort, and advice. The objective of this work is to extract and understand discussions about HIV on a popular social media platform: Twitter, a micro-blogging application. Tweets with the hashtag #HIV were collected over one year, from November 12th, 2018 to November 12th, 2019. They were then filtered and cleaned using NLP techniques, which removed duplicates, non-English texts, and uninformative content such as tweets containing only URLs, mentions, or hashtags. After the cleaning phase, the main analyses carried out were sentiment analysis and content analysis, which used data mining and text mining algorithms to reveal users' emotions and the most influential topics written about HIV. This study illustrates the potential of using social media to analyze the spread of viruses and health conditions through two types of analysis applied to the same topic and dataset: sentiment analysis and content analysis. HIV-related messages were used by organizations and credible sources to disseminate information about treatment and prevention, and by individual users to share their thoughts, emotions, and experiences of living with HIV. Twitter is also used by celebrities and health authorities to respond to public concerns. This work shows that many tweets are written to provide information and emotional support, with assistance from the online community and from healthcare professionals who support individuals living with HIV/AIDS. The algorithms and notions covered in this work can subsequently be used by the public health community or by data scientists to analyze tweets about other viruses or diseases, showing how social media can be used to identify, detect, and study outbreaks in a specific geographical area and time period.
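
    A small sketch of the kind of cleaning and sentiment step described above is shown below, assuming NLTK's VADER analyzer; the exact tools, filters, and thresholds used in the study are not specified here.

        # Tweet cleaning and sentiment labeling sketch (assumed tooling, not the study's exact pipeline).
        import re
        from nltk.sentiment.vader import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")

        def clean_tweet(text: str) -> str:
            """Strip URLs, mentions, and hashtags; return the remaining text."""
            text = re.sub(r"http\S+|www\.\S+", "", text)   # URLs
            text = re.sub(r"[@#]\w+", "", text)            # mentions and hashtags
            return text.strip()

        def sentiment_label(text: str) -> str:
            score = SentimentIntensityAnalyzer().polarity_scores(text)["compound"]
            if score >= 0.05:
                return "positive"
            if score <= -0.05:
                return "negative"
            return "neutral"

        tweets = ["Living with #HIV and still thriving! https://t.co/x", "#HIV"]
        cleaned = [clean_tweet(t) for t in tweets]
        kept = [t for t in cleaned if t]  # drops tweets that contained only URLs, mentions, or hashtags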

    Characterizing HIV discussions and engagement on Twitter

    The novel settings provided by social media allow users to seek and share information on a wide array of subjects, including healthcare and wellness. Analyzing health-related opinions and discussions on these platforms complements traditional public health surveillance systems and supports timely and effective interventions. This study aims to characterize HIV-related conversations on Twitter by identifying the prevalent topics and the key events and actors involved in these discussions. Through the Twitter API, we collected tweets containing the hashtag #HIV over a one-year period. After pre-processing the collected data, we conducted engagement analysis, temporal analysis, and topic modeling on the analytical sample (n = 122,807). Tweets by HIV/AIDS/LGBTQ activists and physicians received the highest level of engagement. An upsurge in tweet volume and engagement was observed during global and local events such as World AIDS Day and HIV/AIDS awareness and testing days for transgender people, Black people, women, and older adults. Eight topics were identified: "stigma", "prevention", "epidemic in the developing countries", "World Aids Day", "treatment", "events", "PrEP", and "testing". Social media discussions offer a nuanced understanding of public opinions, beliefs, and sentiments about numerous health-related issues. The current study reports various dimensions of HIV-related posts on Twitter. Based on the findings, public health agencies and pertinent entities should proactively use Twitter and other social media, engaging the public through influencers. The methodological choices undertaken here may be applied to assess HIV discourse on other popular social media platforms.
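
    For illustration, the topic-modeling step could look like the sketch below, which uses scikit-learn's LDA on a bag-of-words representation with eight components, matching the number of topics reported; the tools and the toy corpus are assumptions, not the study's actual setup.

        # Topic modeling sketch with LDA (assumed tooling; corpus below is a stand-in).
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        docs = ["world aids day testing event", "prep prevention access", "stigma treatment support"]
        vec = CountVectorizer(stop_words="english")
        X = vec.fit_transform(docs)

        lda = LatentDirichletAllocation(n_components=8, random_state=0)  # 8 topics, as in the study
        lda.fit(X)

        terms = vec.get_feature_names_out()
        for k, weights in enumerate(lda.components_):
            top = [terms[i] for i in weights.argsort()[-5:][::-1]]
            print(f"topic {k}: {top}")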

    RLVNA: a Platform for Experimenting with Virtual Networks Adaptations over Public Testbeds

    Network emulators and simulation environments have traditionally supported computer networking and distributed systems research. The continued use of multiple approaches highlights both the value and the limitations of each. To this end, several large-scale virtual network testbeds, such as GENI and CloudLab, have emerged, allowing networked systems to be tested in controlled yet realistic environments and, in particular, facilitating the testing of network management schemes in Software-Defined Networking (SDN) scenarios. Nevertheless, setting up such experiments and later integrating machine learning models into these deployments is challenging. In this paper, we design and implement a web-based platform that integrates Reinforcement Learning (RL)-based models with a virtual network experiment using resources acquired within a real-world testbed, e.g., GENI. Users can reserve network resources (links, switches, and hosts) and configure them through our intuitive interface with little effort. An RL algorithm is then launched to learn how to steer traffic dynamically according to diverse network traffic conditions. The model can be easily customized by the user, while our architecture enables fast reprogramming of the Open Virtual Switches via the instantiated SDN controller. We experimented with trace-based traffic to validate this user-friendly platform and evaluated how centralized and decentralized RL algorithms can effectively lead to self-driving networks. While the system presented in this paper focuses on deploying experiments for virtual network adaptation, the platform can easily be extended to other network management mechanisms and machine learning algorithms.
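
    A minimal sketch of the traffic-steering idea is given below as tabular Q-learning: states are coarse congestion levels, actions are candidate paths, and the reward would favor lower measured latency. The state/action encoding and any environment hooks are hypothetical, not the models shipped with the platform.

        # Tabular Q-learning sketch for path selection under changing traffic conditions.
        import random

        N_STATES, N_PATHS = 4, 3                      # coarse congestion levels x candidate paths
        Q = [[0.0] * N_PATHS for _ in range(N_STATES)]
        alpha, gamma, eps = 0.1, 0.9, 0.1             # learning rate, discount, exploration rate

        def choose_path(state):
            if random.random() < eps:                 # explore occasionally
                return random.randrange(N_PATHS)
            return max(range(N_PATHS), key=lambda a: Q[state][a])  # otherwise exploit

        def update(state, action, reward, next_state):
            """Standard Q-learning update after observing the reward (e.g., negative latency)."""
            best_next = max(Q[next_state])
            Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

    In the platform described above, the chosen action would translate into flow-rule updates pushed to the switches by the SDN controller.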